
Add pack option to the builder options for cloud native buildpacks #916

Open · wants to merge 23 commits into base: main

Conversation

@nickhammond nickhammond commented Aug 28, 2024

This PR introduces Cloud Native Buildpacks to the list of builder options for Kamal.

This opens up the option to use buildpacks instead of writing a Dockerfile from scratch for each app you want to deploy with Kamal. The end result is still an OCI-compliant Docker image that you can run the same way as the images Kamal currently builds.

You can use any buildpacks or builders you'd like, so if you prefer some of the Paketo buildpacks instead, you can use those too. The example below uses Heroku's builder with the ruby and procfile buildpacks, which gives you the familiar Heroku build process when you deploy your application: auto-detection of Bundler, cache management for gem and asset installation, and the various other features that come along with those buildpacks.

With this PR you'd need to have pack installed, as well as Docker, and then change your builder configuration in deploy.yml to:

builder:
  arch: amd64
  pack:
    builder: heroku/builder:24
    buildpacks:
    - heroku/ruby
    - heroku/procfile

The default process that the buildpack tries to boot is the web process; you can define it by adding a Procfile:

web: ./bin/docker-entrypoint ./bin/rails server

And lastly, buildpacks don't bind to a default port, so you'll either need to set proxy.app_port (Kamal 2.0 / kamal-proxy) to your application's port or have your app listen on port 80, which is kamal-proxy's default.

servers:
  web:
    hosts:
      - 123.456.78.9
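
As a minimal sketch, if your app listens on a port other than 80 (assuming port 3000 here), you'd add a proxy section alongside the servers block, using the app_port option from the Kamal 2.0 proxy configuration:

```yaml
# deploy.yml – tell kamal-proxy which port the app listens on
proxy:
  app_port: 3000
```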

Buildpacks work in a detect-then-build flow. The detect step looks for common files or runs checks that indicate it is indeed a Ruby application, by looking for a Gemfile.lock for instance. If the detect step passes, it triggers the build phase, which essentially runs a bundle install in this example.
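
The detect idea can be sketched in plain Ruby. This is purely illustrative: real buildpacks implement the contract via a bin/detect executable, and the function name here is assumed.

```ruby
require "pathname"

# Illustrative detect step: a Ruby buildpack decides it applies by
# checking for well-known files in the app source. Real buildpacks
# implement this via a bin/detect executable; this is only a sketch.
def ruby_app?(dir)
  root = Pathname(dir)
  root.join("Gemfile.lock").exist? || root.join("Gemfile").exist?
end
```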

With heroku/builder:24, so far I've found that the image size is about the same; it's only 2MB off for a 235MB image. Build time is typically faster with pack, but that depends on how well you've optimized your Dockerfile. The real win, though, is not having to think about how to cache your gem installs, node installs, or any other package-manager installs that have a buildpack. It also follows the common conventions for building containers, sidestepping the various stumbling blocks that Heroku and others have been blazing through over the years.

Kamal discussion: #795
Heroku discussion: heroku/buildpacks#6
Heroku official buildpacks: https://devcenter.heroku.com/articles/buildpacks
Heroku 3rd party buildpacks: https://elements.heroku.com/buildpacks
Full setup overview: https://www.fromthekeyboard.com/deploying-a-rails-app-with-kamal-heroku-style/

Todos:

  • Companion PR for kamal-site (started in "Add a builder example section about buildpacks", kamal-site#117)
  • Excluded files mention via project.toml
  • Catch up with main
  • Discuss potential for a remote pack option (out of scope)
  • Test pack with the git archive context
  • What should kamal build create do when using buildpacks? Just point to the install docs? https://buildpacks.io/docs/for-platform-operators/how-to/integrate-ci/pack/ Since you don't typically call kamal build create directly and it's instead invoked as part of a build, I'm going to close this one out.
  • Update kamal build details to run pack version && pack builder inspect
  • Does kamal build remove need to do anything? Since we're not creating a build context like you normally would with Docker, there's nothing to actually remove. Is there anything in the Kamal lifecycle, though, that at least needs a no-op method for this?

buildpacks,
"-t", config.absolute_image,
"-t", config.latest_image,
"--env", "BP_IMAGE_LABELS=service=#{config.service}",
Contributor Author:

Kamal expects there to be a service label; this automatically adds the label via the paketo-buildpacks/image-labels buildpack.

lib/kamal/commands/builder/native/pack.rb (outdated, resolved)
end

def buildpacks
(pack_buildpacks << "paketo-buildpacks/image-labels").map { |buildpack| ["--buildpack", buildpack] }
Contributor Author:

Adding this buildpack automatically so that we can label the image for Kamal
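
To make the mapping concrete, here's a standalone sketch (variable names assumed, not the actual Kamal source) of how the configured buildpack list, plus the automatically appended image-labels buildpack, expands into repeated --buildpack flags:

```ruby
# Illustrative only: expand the configured buildpacks, plus the
# auto-appended labeling buildpack, into repeated CLI flags.
pack_buildpacks = ["heroku/ruby", "heroku/procfile"]

flags = (pack_buildpacks + ["paketo-buildpacks/image-labels"])
          .map { |buildpack| ["--buildpack", buildpack] }
          .flatten

puts flags.join(" ")
# => --buildpack heroku/ruby --buildpack heroku/procfile --buildpack paketo-buildpacks/image-labels
```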

@@ -37,6 +37,16 @@ builder:
arch: arm64
host: ssh://docker@docker-builder

# Buildpack configuration
#
# The build configuration for using pack to build a Cloud Native Buildpack image.
Contributor Author:

Add mention of project.toml to set your excluded options. https://buildpacks.io/docs/for-app-developers/how-to/build-inputs/use-project-toml/
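
As a sketch of what that might look like (key names and schema version should be checked against the project descriptor docs linked above; this is an assumption, not verified against the spec):

```toml
# project.toml – project descriptor; exclude files from the build context
[_]
schema-version = "0.2"

[io.buildpacks]
exclude = ["spec/", "tmp/", "*.log"]
```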

@nickhammond (Contributor Author) commented Oct 2, 2024:

As I was thinking about this, after removing context: "." it doesn't matter as much since the build uses the git clone. The exclusion list is really only relevant when you're using "." as your build context.

@nickhammond changed the title from "Add a pack option to the builder options for cloud native buildpacks" to "Add pack option to the builder options for cloud native buildpacks" on Aug 28, 2024
@nickhammond marked this pull request as ready for review August 28, 2024 06:02
alexohre commented Sep 3, 2024

Hey @nickhammond and @dhh, does this change resolve the custom build issue of using a builder of choice, e.g. Docker Build Cloud?

@nickhammond (Contributor Author):

Hey @alexohre, no, that'll just be a remote builder with the engine pointing to Docker Build Cloud (#914), which would be a different PR. The builders were just reorganized a bit as well, so it might be simpler to add an additional option for a remote cloud builder in a different PR.

alexohre commented Sep 4, 2024

Hey @alexohre, no, that'll just be a remote builder with the engine pointing to Docker Build Cloud (#914), which would be a different PR. The builders were just reorganized a bit as well, so it might be simpler to add an additional option for a remote cloud builder in a different PR.

Oh, thanks for the awareness. I would be glad if you could help me make a PR for that since I don't know how to build gems or modify it for now


private
def platform
"linux/#{local_arches.first}"
@nickhammond (Contributor Author) commented Sep 6, 2024:

Pack only supports building for one platform; make that obvious in the docs.

Collaborator:

We can add a validation for this in Kamal::Configuration::Validator::Builder.

Contributor Author:

@djmb Added a validation for this.
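
A hypothetical sketch (not the merged code; method and message are assumed) of the kind of check a Kamal::Configuration::Validator::Builder validation could perform: reject multi-arch configs when pack is selected, since pack builds for a single platform.

```ruby
# Hypothetical sketch of a single-arch check for the pack builder.
# Not the actual Kamal validator; names and structure are assumed.
def validate_single_arch_for_pack!(config)
  arches = Array(config["arch"])
  if config.key?("pack") && arches.size > 1
    raise ArgumentError, "pack builds support only one arch, got: #{arches.join(", ")}"
  end
  true
end

validate_single_arch_for_pack!({ "arch" => ["amd64"], "pack" => {} })
# => true
```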


dhh commented Sep 22, 2024

This is fascinating work, @nickhammond. I'm surprised by how unobtrusive it is! But I'd like to understand the whole flow better. I'm not sure this is going to be all that relevant for Rails apps that now already come with well-optimized Dockerfiles out of the box, but I could see how that may well be different if you're doing a Sinatra app or some app from another framework that doesn't provide that.

Could you show how the entire flow would go with, say, a Sinatra app, using buildpacks, and deploying on something like Digital Ocean? Want to make sure that this isn't tied to any one company or platform.

@nickhammond (Contributor Author):

Hey @dhh, thanks for taking a look!

I think adding support for buildpacks will be great for the adoption of Kamal, but you can always still reach for the sharper tool (a full Dockerfile) when needed.

I built out a few hello-world examples; the main thing is just making sure your app boots on port 80 for kamal-proxy, or ensuring that you set app_port if it doesn't. This isn't a buildpack-specific thing but more of a change that came with kamal-proxy: buildpack-built images don't expose a port by default.

Here are the hello-world apps that I built and tested on DigitalOcean; I also wrote a more detailed overview of the whole process.

hone commented Sep 25, 2024

@nickhammond thanks for all your investigations and opening this PR. ❤️

Want to make sure that this isn't tied to any one company or platform.

@dhh 👋 It's been a while. As a Cloud Native Buildpacks (CNB) maintainer I'm biased and would love to see this supported in Kamal. :)

Nick touches on this in his blog, but if it's any assurance, CNB as an upstream project is a CNCF Incubation project, which pushes for not being a single-vendor OSS project. In fact, the project was started from the get-go by two companies, Heroku and Pivotal. It's really about bringing that Heroku magic to container image building: transforming your app source code into an OCI image (no Dockerfile needed). You can push the image to a registry, docker run it locally, or even use it as a base image in the FROM directive of a Dockerfile. If you don't want to use the Heroku builder and buildpacks, there are the Paketo ones, and you can also write your own.

@nickhammond (Contributor Author):

Started on the docs in this kamal-site PR basecamp/kamal-site#117.

@djmb (Collaborator) left a comment:

@nickhammond - I noticed in your sample apps that you've set the context for the builder to ., which avoids using the git clone for building.

Is that just a preference or is there any reason it would be required?



@@ -33,7 +34,7 @@ def info
end

def inspect_builder
docker :buildx, :inspect, builder_name unless docker_driver?
docker :buildx, :inspect, builder_name unless docker_driver? || pack?
Collaborator:

Maybe we could extract a buildx? method here?

def buildx?
  !docker_driver? && !pack?
end

Contributor Author:

@djmb We could also run pack builder inspect, which returns a bunch of information about the default builder. It's a lot of information, but it might be useful for triage if you're not sure which builder you're using. The pack CLI lets you set your default builder, so I have mine set to heroku/builder:24 via pack config default-builder heroku/builder:24.

@alexohre commented Oct 4, 2024:

@nickhammond Does this mean I can now pass my builder name to kamal?

Buildx cloud_builder

Contributor Author:

@alexohre No, I don't think there's a PR open for that, just the discussion here #914 (comment)

"-t", config.absolute_image,
"-t", config.latest_image,
"--env", "BP_IMAGE_LABELS=service=#{config.service}",
*argumentize("--env", secrets, sensitive: true),
Collaborator:

Is using environment variables the standard way to get secrets into a buildpack?

Contributor Author:

@djmb Yes, they only have the --env flag.

I just tested building with a few secrets because I was concerned they'd end up in the final image but they don't.

I just found this in the docs site, though. TL;DR: it's just a build-time env var; they're not available at image runtime. So they're naturally "secret", neat.

https://buildpacks.io/docs/for-platform-operators/how-to/integrate-ci/pack/cli/pack_build/#options

  -e, --env stringArray              Build-time environment variable, in the form 'VAR=VALUE' or 'VAR'.
                                     When using latter value-less form, value will be taken from current
                                       environment at the time this command is executed.
                                     This flag may be specified multiple times and will override
                                       individual values defined by --env-file.
                                     Repeat for each env in order (comma-separated lists not accepted)
                                     NOTE: These are NOT available at image runtime.

@nickhammond (Contributor Author):

@nickhammond - I noticed in your sample apps that you've set the context for the builder to ., which avoids using the git clone for building.

Is that just a preference or is there any reason it would be required?

@djmb I usually just use the old context style when initially getting a project going since it's a bit faster. I've removed the context on all of the sample apps and they're all deploying successfully.

To be honest, though, I'm not super familiar with how the clone process actually works. I'm passing the same builder_context to the pack CLI that's currently passed to the Docker build, and for both I'm seeing the context as "." in the output.

Full pack command that's running:

  INFO [d6b21e93] Running /usr/bin/env pack build nickhammond/hotdonuts --platform linux/amd64 --builder heroku/builder:24 --buildpack heroku/ruby --buildpack heroku/procfile --buildpack paketo-buildpacks/image-labels -t nickhammond/hotdonuts:eca189d62a8c2ba97fdfca5f85699a50a6d50ce4 -t nickhammond/hotdonuts:latest --env BP_IMAGE_LABELS=service=hotdonuts --path . && docker push nickhammond/hotdonuts:eca189d62a8c2ba97fdfca5f85699a50a6d50ce4 && docker push nickhammond/hotdonuts:latest as n@localhost

Vs. Docker command:

  INFO [e3db1e68] Running docker buildx build --push --platform linux/amd64 --builder kamal-local-docker-container -t nickhammond/hotdonuts:eca189d62a8c2ba97fdfca5f85699a50a6d50ce4 -t nickhammond/hotdonuts:latest --label service="hotdonuts" --file Dockerfile . as n@localhost

With both, though, I'm seeing the clone steps; does it clone into that temp directory and then drop into it?

  INFO Cloning repo into build directory `/var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/`...
  INFO [a351ef89] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e clone /Users/n/src/hotdonuts-sinatra --recurse-submodules as n@localhost
  INFO Resetting local clone as `/var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/` already exists...
  INFO [a0986137] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ remote set-url origin /Users/n/src/hotdonuts-sinatra as n@localhost
  INFO [a0986137] Finished in 0.005 seconds with exit status 0 (successful).
  INFO [13044d07] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ fetch origin as n@localhost
  INFO [13044d07] Finished in 0.019 seconds with exit status 0 (successful).
  INFO [05383764] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ reset --hard eca189d62a8c2ba97fdfca5f85699a50a6d50ce4 as n@localhost
  INFO [05383764] Finished in 0.007 seconds with exit status 0 (successful).
  INFO [97641014] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ clean -fdx as n@localhost
  INFO [97641014] Finished in 0.006 seconds with exit status 0 (successful).
  INFO [8b0ae09b] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ submodule update --init as n@localhost
  INFO [8b0ae09b] Finished in 0.047 seconds with exit status 0 (successful).
  INFO [753722cb] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ status --porcelain as n@localhost
  INFO [753722cb] Finished in 0.006 seconds with exit status 0 (successful).
  INFO [0e2a42fa] Running /usr/bin/env git -C /var/folders/6q/53gfp0q92gngndncp5mmk9cr0000gn/T/kamal-clones/hotdonuts-39f3a8537243e/hotdonuts-sinatra/ rev-parse HEAD as n@localhost
  INFO [0e2a42fa] Finished in 0.005 seconds with exit status 0 (successful).
  INFO [7ba57d96] Running /usr/bin/env  as n@localhost
  INFO [7ba57d96] Finished in 0.002 seconds with exit status 0 (successful).
  INFO [d6b21e93] Running /usr/bin/env pack build nickhammond/hotdonuts --platform linux/amd64 --builder heroku/builder:24 --buildpack heroku/ruby --buildpack heroku/procfile --buildpack paketo-buildpacks/image-labels -t nickhammond/hotdonuts:eca189d62a8c2ba97fdfca5f85699a50a6d50ce4 -t nickhammond/hotdonuts:latest --env BP_IMAGE_LABELS=service=hotdonuts --path . && docker push nickhammond/hotdonuts:eca189d62a8c2ba97fdfca5f85699a50a6d50ce4 && docker push nickhammond/hotdonuts:latest as n@localhost

"-t", config.absolute_image,
"-t", config.latest_image,
"--env", "BP_IMAGE_LABELS=service=#{config.service}",
*argumentize("--env", args),
Contributor Author:

With Docker, build args are passed as --build-arg, and with Kamal you set them via:

  args:
    ENVIRONMENT: production

You'd still set "build args" with pack via the same args section, but they ultimately get passed as --env to the pack command. This is to reduce confusion about when to use env vs. args when you're testing out your builds.
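
As a standalone sketch (plain Ruby, mirroring what Kamal's argumentize helper does; variable names here are assumed) of how that args hash ends up on the pack command line:

```ruby
# Illustrative only: expand a builder "args" hash from deploy.yml into
# the repeated --env flags that pack receives. Example values assumed.
args = { "ENVIRONMENT" => "production", "RAILS_SERVE_STATIC_FILES" => "1" }

env_flags = args.flat_map { |name, value| ["--env", "#{name}=#{value}"] }

puts env_flags.join(" ")
# => --env ENVIRONMENT=production --env RAILS_SERVE_STATIC_FILES=1
```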

docker(:push, config.latest_image)
end

def remove;end
Contributor Author:

We're not actually creating anything with the buildpack setup, so there isn't anything to remove. Should we still puts something out to the user?


nickhammond commented Oct 30, 2024

Highlighting some build-time stats for a traditional build with a Dockerfile vs. buildpacks.

I ran these on an M2 Max (64GB); Docker is capped at 4GB RAM and 4 CPUs. All timing is based on running time bundle exec kamal build push. I'm using context: "." since I'm just testing this locally; the final image runs in both scenarios.

Buildpacks

builder:
  arch: amd64
  context: "."
  pack:
    builder: "heroku/builder:24"
    buildpacks:
      - heroku/ruby
      - heroku/procfile
Full pack command that Kamal is running
/usr/bin/env pack build nickhammond/hey --platform linux/amd64 \
--creation-time now \
--builder heroku/builder:24 \
--buildpack heroku/ruby \
--buildpack heroku/procfile \
--buildpack paketo-buildpacks/image-labels \
-t nickhammond/hey:e2765756a4ba2cd6dc367302b114ec3757a033f0 \
-t nickhammond/hey:latest-production \
--env BP_IMAGE_LABELS=service=hey \
--path . && \
docker push nickhammond/hey:e2765756a4ba2cd6dc367302b114ec3757a033f0 && \
docker push nickhammond/hey:latest-production
  • Fresh build with no build cache: 2.31s user 0.83s system 2% cpu 2:05.44 total
  • No code changes, just rebuilding: 2.22s user 0.71s system 7% cpu 39.932 total
  • Code changes(app/model change): 2.60s user 0.77s system 8% cpu 40.892 total
  • Gemfile(added rest-client): 2.24s user 0.67s system 5% cpu 51.597 total
  • Compressed size as stated on Docker Hub: 231MB

Dockerfile (The one that ships with Rails - MySQL, Ruby 3.3.5, Rails main, included below)

builder:
  arch: amd64
  context: "."
Dockerfile
# syntax=docker/dockerfile:1
# check=error=true

# This Dockerfile is designed for production, not development. Use with Kamal or build'n'run by hand:
# docker build -t main_app .
# docker run -d -p 80:80 -e RAILS_MASTER_KEY=<value from config/master.key> --name main_app main_app

# For a containerized dev environment, see Dev Containers: https://guides.rubyonrails.org/getting_started_with_devcontainer.html

# Make sure RUBY_VERSION matches the Ruby version in .ruby-version
ARG RUBY_VERSION=3.3.5
FROM docker.io/library/ruby:$RUBY_VERSION-slim AS base

# Rails app lives here
WORKDIR /rails

# Install base packages
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y curl default-mysql-client libjemalloc2 libvips && \
    rm -rf /var/lib/apt/lists /var/cache/apt/archives

# Set production environment
ENV RAILS_ENV="production" \
    BUNDLE_DEPLOYMENT="1" \
    BUNDLE_PATH="/usr/local/bundle" \
    BUNDLE_WITHOUT="development"

# Throw-away build stage to reduce size of final image
FROM base AS build

# Install packages needed to build gems
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y build-essential default-libmysqlclient-dev git pkg-config && \
    rm -rf /var/lib/apt/lists /var/cache/apt/archives

# Install application gems
COPY Gemfile Gemfile.lock vendor ./

RUN bundle install && \
    rm -rf ~/.bundle/ "${BUNDLE_PATH}"/ruby/*/cache "${BUNDLE_PATH}"/ruby/*/bundler/gems/*/.git && \
    bundle exec bootsnap precompile --gemfile

# Copy application code
COPY . .

# Precompile bootsnap code for faster boot times
RUN bundle exec bootsnap precompile app/ lib/

# Precompiling assets for production without requiring secret RAILS_MASTER_KEY
RUN SECRET_KEY_BASE_DUMMY=1 ./bin/rails assets:precompile




# Final stage for app image
FROM base

# Copy built artifacts: gems, application
COPY --from=build "${BUNDLE_PATH}" "${BUNDLE_PATH}"
COPY --from=build /rails /rails

# Run and own only the runtime files as a non-root user for security
RUN groupadd --system --gid 1000 rails && \
    useradd rails --uid 1000 --gid 1000 --create-home --shell /bin/bash && \
    chown -R rails:rails db log storage tmp
USER 1000:1000

# Entrypoint prepares the database.
ENTRYPOINT ["/rails/bin/docker-entrypoint"]

# Start server via Thruster by default, this can be overwritten at runtime
EXPOSE 80
CMD ["./bin/thrust", "./bin/rails", "server"]

  • Fresh build with no build cache: 0.91s user 0.72s system 0% cpu 3:07.55 total
  • No code changes, just rebuilding: 0.48s user 0.31s system 14% cpu 5.282 total
  • Code changes(app/model change): 0.55s user 0.38s system 2% cpu 33.586 total
  • Gemfile(added rest-client): 0.64s user 0.46s system 1% cpu 1:27.40 total
  • Compressed size as stated on Docker Hub: 237MB

A few takeaways:

  • The main difference in the initial build with a Dockerfile is that it's doing a bunch of apt installs. The Heroku builder already has these built and baked in.
  • Heroku buildpacks take a hit for fetching the latest buildpack details, which is why the "no code changes" run is so much faster with a Dockerfile (5s vs. 39s). @edmorley Is there a way to pin your buildpack version to a SHA so it doesn't try to fetch on each build? I always see this, which takes about 10-15 seconds:
 DEBUG [da5ec3f9]       24: Pulling from heroku/builder
 DEBUG [da5ec3f9]       Digest: sha256:eebd3baebb92ac69437aab9a66f0c169be15aaa24c7e67e7ea3664a8442b54fd
 DEBUG [da5ec3f9]       Status: Image is up to date for heroku/builder:24
 DEBUG [da5ec3f9]       24: Pulling from heroku/heroku
 DEBUG [da5ec3f9]       Digest: sha256:e8884bb60ec847d4b461da844dbe7a802f704c88d6fedb1ad9c4d4294765e443
 DEBUG [da5ec3f9]       Status: Image is up to date for heroku/heroku:24
  • For the Gemfile change, the default Dockerfile doesn't include a cache volume for gems, which results in re-installing all of the application's gems. This is of course mostly fixable with a cache mount and a modified bundle install command.
Caching your bundle install with a Dockerfile
RUN --mount=type=cache,id=bld-gem-cache-3-2,sharing=locked,target=/srv/vendor \
    gem install bundler -v 2.4.22 && \
    bundle config set app_config .bundle && \
    bundle config set path /srv/vendor && \
    bundle config set deployment 'true' && \
    bundle config set without 'development test toolbox' && \
    bundle install --jobs 8 && \
    bundle clean && \
    mkdir -p vendor && \
    bundle config set --local path vendor && \
    cp -ar /srv/vendor . && \
    rm -rf vendor/ruby/*/cache vendor/ruby/*/bundler/gems/*/.git && \
    find vendor/ruby/*/gems/ -name "*.c" -delete && \
    find vendor/ruby/*/gems/ -name "*.o" -delete
Heroku's buildpack automatically utilizing a cache for your gems
 DEBUG [796edaf0]       [builder] - Ruby version `3.3.5` from `Gemfile.lock`
 DEBUG [796edaf0]       [builder]   - Using cached Ruby version
 DEBUG [796edaf0]       [builder] - Bundler version `2.5.3` from `Gemfile.lock`
 DEBUG [796edaf0]       [builder]   - Using cached version
 DEBUG [796edaf0]       [builder] - Bundle install
 DEBUG [796edaf0]       [builder]   - Loading cached gems
 DEBUG [796edaf0]       [builder]   - changes detected in '/workspace/Gemfile.lock' and '/workspace/Gemfile'
 DEBUG [796edaf0]       [builder]   - Running `BUNDLE_BIN="/layers/heroku_ruby/gems/bin" BUNDLE_CLEAN="1" BUNDLE_DEPLOYMENT="1" BUNDLE_GEMFILE="/workspace/Gemfile" BUNDLE_PATH="/layers/heroku_ruby/gems" BUNDLE_WITHOUT="development:test" bundle install`
 DEBUG [796edaf0]       [builder]
 DEBUG [796edaf0]       [builder]       Fetching gem metadata from https://rubygems.org/........
 DEBUG [796edaf0]       [builder]       Fetching domain_name 0.6.20240107
 DEBUG [796edaf0]       [builder]       Fetching http-accept 1.7.0
 DEBUG [796edaf0]       [builder]       Fetching mime-types-data 3.2024.1001
 DEBUG [796edaf0]       [builder]       Fetching netrc 0.11.0
 DEBUG [796edaf0]       [builder]       Installing http-accept 1.7.0
 DEBUG [796edaf0]       [builder]       Installing domain_name 0.6.20240107
 DEBUG [796edaf0]       [builder]       Fetching http-cookie 1.0.7
 DEBUG [796edaf0]       [builder]       Installing mime-types-data 3.2024.1001
 DEBUG [796edaf0]       [builder]       Installing netrc 0.11.0
 DEBUG [796edaf0]       [builder]       Fetching mime-types 3.6.0
 DEBUG [796edaf0]       [builder]       Installing http-cookie 1.0.7
 DEBUG [796edaf0]       [builder]       Installing mime-types 3.6.0
 DEBUG [796edaf0]       [builder]       Fetching rest-client 2.1.0
 DEBUG [796edaf0]       [builder]       Installing rest-client 2.1.0
 DEBUG [796edaf0]       [builder]       Bundle complete! 26 Gemfile dependencies, 108 gems now installed.
 DEBUG [796edaf0]       [builder]       Gems in the groups 'development' and 'test' were not installed.
 DEBUG [796edaf0]       [builder]       Bundled gems are installed into `/layers/heroku_ruby/gems`
 DEBUG [796edaf0]       [builder]
 DEBUG [796edaf0]       [builder]   - Done (7.552s)

edmorley commented Oct 30, 2024

  • Heroku buildpacks take a hit for fetching the latest buildpack details which is why the "No code change" run is so much faster(39s vs. 5s) with a Dockerfile. @edmorley Is there a way to pin your buildpack version to a SHA so it doesn't try to fetch on each build? I always see this which takes about 10-15 seconds:

Hi! pack build checking for an updated image should take at most 1-2 seconds (a couple of requests to the registry), not 15. Are you sure there isn't something else at play? (If the local images are up to date, nothing is actually pulled after the update check.)

If needed, the update check itself can be skipped using pack build ... --pull-policy if-not-present - though you would then need to add a separate way to periodically update the builder+run images (unless the nodes are periodically replaced guaranteeing an image pull at least every few days or similar, to pick up any security updates etc).

@nickhammond (Contributor Author):

@edmorley You're right; I found the pull-policy setting and set it to if-not-present. When running with that and --verbose, though, I'm still seeing a bit of a lag. 10-15s is probably a bit high, but it stalls somewhere between 5-10s and then finally mentions Warning: Builder is trusted but additional modules were added; using the untrusted (5 phases) build flow. Is that an expected amount of time to prepare the pack to build?

6 participants